14 research outputs found

    Unconventional Applications of Compiler Analysis

    Compiler transformations have traditionally focused on minimizing program execution time. This thesis explores examples of applying compiler technology outside of that original scope. Specifically, we apply compiler analysis to the field of software maintenance and evolution by examining the use of global data throughout the lifetimes of many open source projects. We also investigate the effects of compiler optimizations on the power consumption of small battery-powered devices. Finally, in an area closer to traditional compiler research, we examine automatic program parallelization in the form of thread-level speculation.

    Erratum to: Methods for evaluating medical tests and biomarkers

    [This corrects the article DOI: 10.1186/s41512-016-0001-y.]

    Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological systematic review of health technology assessments

    Background: Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated, and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy.

    Methods: We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how the results informed the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated.

    Results: The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling were obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters, but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care, but the majority of those reports found limited evidence on test accuracy in primary care settings.

    Conclusions: The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
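
    The bivariate model mentioned in the results can be viewed, under the usual normal approximation, as a two-dimensional random-effects model on logit sensitivity and logit specificity. The following is a minimal Python sketch of that idea; the study counts, the logit_and_var helper and the simple Nelder-Mead fit are illustrative assumptions, not the method used in any specific HTA report.

        # Minimal sketch: bivariate random-effects meta-analysis of test accuracy
        # (normal approximation on the logit scale), with hypothetical 2x2 data.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit
        from scipy.stats import multivariate_normal

        # Hypothetical per-study counts: (TP, FP, FN, TN)
        studies = [
            (45, 8, 5, 42),
            (30, 12, 10, 48),
            (60, 5, 15, 70),
            (22, 9, 3, 36),
            (55, 14, 8, 63),
        ]

        def logit_and_var(events, nonevents):
            """Logit proportion and its approximate variance (0.5 continuity correction)."""
            e, n = events + 0.5, nonevents + 0.5
            return np.log(e / n), 1.0 / e + 1.0 / n

        # Per-study logit sensitivity/specificity and within-study variances
        y, s2 = [], []
        for tp, fp, fn, tn in studies:
            ls, vs = logit_and_var(tp, fn)   # sensitivity = TP / (TP + FN)
            lp, vp = logit_and_var(tn, fp)   # specificity = TN / (TN + FP)
            y.append((ls, lp))
            s2.append((vs, vp))
        y, s2 = np.array(y), np.array(s2)

        def neg_log_lik(theta):
            """Marginal likelihood: y_i ~ N(mu, Sigma + diag(within-study variances))."""
            mu = theta[:2]
            tau1, tau2 = np.exp(theta[2]), np.exp(theta[3])
            rho = np.tanh(theta[4])
            sigma = np.array([[tau1 ** 2, rho * tau1 * tau2],
                              [rho * tau1 * tau2, tau2 ** 2]])
            nll = 0.0
            for yi, s2i in zip(y, s2):
                nll -= multivariate_normal.logpdf(yi, mean=mu, cov=sigma + np.diag(s2i))
            return nll

        fit = minimize(neg_log_lik, x0=np.array([1.0, 1.0, -1.0, -1.0, 0.0]),
                       method="Nelder-Mead")
        print(f"summary sensitivity: {expit(fit.x[0]):.3f}")
        print(f"summary specificity: {expit(fit.x[1]):.3f}")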

    Distributed models of thread-level speculation

    This paper introduces a novel application of thread-level speculation to a distributed heterogeneous environment. We propose and evaluate two speculative models which attempt to reduce some of the method call overhead associated with distributed objects. Thread-level speculation exploits parallelism in code which is not provably free of data dependencies. Our evaluation of the application of thread-level speculation to client-server applications resulted in substantial performance increases, on the order of 3 times for our initial model and 21 times for the second.
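
    To make the speculate/commit/squash idea concrete, here is a minimal Python sketch of return-value speculation across a simulated remote method call. The slow_remote_call, continuation and last-value predictor below are hypothetical stand-ins; this only illustrates the general technique, not the two distributed models proposed in the paper.

        # Minimal sketch: run the continuation on a predicted return value while
        # the "remote" call is still in flight, then commit or re-execute.
        import time
        from concurrent.futures import ThreadPoolExecutor

        def slow_remote_call(x):
            """Stand-in for a remote method invocation with network latency."""
            time.sleep(0.2)
            return x * 2

        def continuation(value):
            """Work that depends on the call's return value."""
            return sum(i * value for i in range(1_000))

        def speculative_call(x, predict):
            """Speculatively execute the continuation; commit if the prediction
            matched the real return value, otherwise discard and re-execute."""
            with ThreadPoolExecutor(max_workers=2) as pool:
                real = pool.submit(slow_remote_call, x)
                guess = predict(x)
                speculative = pool.submit(continuation, guess)
                actual = real.result()
                if actual == guess:
                    return speculative.result()   # commit the speculative work
                speculative.cancel()              # squash (result is discarded)
                return continuation(actual)       # re-execute non-speculatively

        # A trivially correct predictor for this toy example.
        print(speculative_call(21, predict=lambda x: x * 2))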

    Recovering Maintainability Effort in the Presence of Global Data Usage

    As the useful life expectancy of software continues to increase, the task of maintaining the source code has become the dominant phase of the software life-cycle. In order to improve the ability of software to age and successfully evolve over time, it is important to identify system design and programming practices which may result in increased difficulty in maintaining the code base. This study attempts to correlate the use of global variables with the maintainability of several widely deployed, large scale software projects as they evolve over time. Two measures are proposed to quantify the maintenance effort of a project. The first measure compares the number of CVS revisions for all source files in a release to the number of revisions applied to files where the usage of global data is most prevalent. A second measure characterizes the degree of change by contrasting the amount of source code that was changed overall with the changes made to those source files which contain the majority of the references to global data. We observed strong correlations between global variable references and both the number of revisions and the number of changed lines of code; in all cases the correlation with the number of revisions was the stronger of the two. This provides evidence that a strong relationship exists between the usage of global variables and both the number and scope of changes applied to files between product releases.
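
    A toy version of the two correlation measures described above might look like the following Python sketch. The per-file numbers are hypothetical placeholders purely for illustration; the study itself mined CVS histories of large open source projects.

        # Minimal sketch: correlate per-file global variable references with two
        # maintenance-effort proxies (revision counts and changed lines of code).
        import numpy as np

        # Hypothetical per-file data for one release interval:
        # columns are (global references, CVS revisions, lines of code changed)
        files = np.array([
            [12, 30, 410],
            [ 0,  4,  35],
            [ 7, 18, 220],
            [ 1,  6,  60],
            [25, 41, 690],
            [ 3,  9,  80],
        ])

        global_refs, revisions, loc_changed = files.T

        def pearson(a, b):
            """Pearson correlation coefficient between two samples."""
            return np.corrcoef(a, b)[0, 1]

        print("globals vs. revisions:  ", round(pearson(global_refs, revisions), 3))
        print("globals vs. LOC changed:", round(pearson(global_refs, loc_changed), 3))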